Results 1 - 4 of 4
1.
IEEE J Transl Eng Health Med; 12: 171-181, 2024.
Article in English | MEDLINE | ID: mdl-38088996

ABSTRACT

The study of emotions through the analysis of induced physiological responses has gained increasing interest over the past decades. Emotion-related studies usually employ films or video clips, but these stimuli do not make it possible to properly separate and assess the emotional content conveyed by sight or hearing in terms of physiological responses. In this study, we devised an experimental protocol to elicit emotions using, separately and jointly, pictures and sounds from the widely used International Affective Picture System and International Affective Digitized Sounds databases. We processed galvanic skin response, electrocardiogram, blood volume pulse, pupillary signal, and electroencephalogram recordings from 21 subjects to extract both autonomic and central nervous system indices and to assess physiological responses to three types of stimulation: auditory, visual, and auditory/visual. Results show a higher galvanic skin response to sounds than to images. Electrocardiogram and blood volume pulse show different trends between auditory and visual stimuli. The electroencephalographic signal reveals greater attention paid by the subjects when listening to sounds than when watching images. In conclusion, these results suggest that emotional responses increase during auditory stimulation at both the central and peripheral levels, demonstrating the importance of sounds for emotion recognition experiments and opening the possibility of extending auditory stimuli to other fields of psychophysiology. Clinical and Translational Impact Statement: These findings corroborate the importance of auditory stimuli in eliciting emotions, supporting their use in studying affective responses, e.g., in mood disorder diagnosis, human-machine interaction, and emotional perception in pathology.
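
The abstract does not describe the exact signal-processing pipeline; as an illustration only, here is a minimal Python sketch of how a per-trial galvanic skin response amplitude might be extracted and compared between auditory and visual conditions. The sampling rate, epoch structure, and all variable names are assumptions, not taken from the paper.

    # Illustrative sketch only -- not the authors' pipeline. Assumes per-trial
    # GSR epochs (in microsiemens) already segmented by stimulation condition.
    import numpy as np
    from scipy.signal import butter, filtfilt
    from scipy.stats import wilcoxon

    FS = 32  # hypothetical GSR sampling rate in Hz

    def phasic_amplitude(epoch, fs=FS):
        # Crude phasic estimate: high-pass the epoch to remove the slow tonic
        # level, then take peak amplitude relative to the first-second baseline.
        b, a = butter(2, 0.05 / (fs / 2), btype="high")
        phasic = filtfilt(b, a, epoch)
        return phasic.max() - phasic[:fs].mean()

    def compare_conditions(auditory_epochs, visual_epochs):
        # Paired nonparametric comparison of per-trial response amplitudes.
        aud = np.array([phasic_amplitude(e) for e in auditory_epochs])
        vis = np.array([phasic_amplitude(e) for e in visual_epochs])
        stat, p = wilcoxon(aud, vis)
        return aud.mean(), vis.mean(), p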


Subjects
Emotions, Sound, Humans, Emotions/physiology, Acoustic Stimulation/methods, Hearing, Mood Disorders
2.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 3710-3713, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086568

ABSTRACT

Emotion processing is a complex mechanism that involves different physiological systems. In particular, the central nervous system (CNS) is considered to play a key role in this mechanism, and one of the main modalities for studying CNS activity is the electroencephalographic (EEG) signal. To elicit emotions, different kinds of stimuli can be used, e.g., audio, visual, or a combination of the two. Studies in the literature focus mostly on the correct classification of the different types of emotions or on which kind of stimulation gives the best classification accuracy. However, it is still unclear how the different stimuli elicit emotions and what the resulting brain activity is. In this paper, we analyzed and compared EEG signals recorded while eliciting emotions using audio stimuli, visual stimuli, or a combination of the two. Data were collected during experiments conducted in our laboratories using the IAPS and IADS datasets. Our study confirmed physiological findings on emotions reported in the literature, highlighting higher brain activity in the frontal and central regions and in the δ and θ bands for each kind of stimulus. However, audio stimulation produced stronger responses than the other two stimulation modalities in almost all the comparisons performed. Higher values of the δ/β ratio, an index related to negative emotions, were obtained when using only sounds as stimuli. Moreover, the same type of stimulus resulted in higher δ-β coupling, suggesting better attention control. We conclude that stimulating subjects without letting them see what is actually happening may produce a stronger perception of emotions, even if this mechanism remains highly subjective. Clinical Relevance: This paper suggests that audio stimuli may produce a stronger perception of the elicited emotion, resulting in higher brain activity in the physiologically relevant areas and more focused subjects. Thus, using only audio in emotion-related studies may give more reliable and consistent results.
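
For readers unfamiliar with the δ/β ratio used above as a marker of negative emotions, here is a minimal sketch of how such a band-power ratio is commonly computed from a single EEG channel using a Welch periodogram. The sampling rate and band edges are typical values assumed for illustration, not parameters reported in the paper.

    # Illustrative sketch only: delta/beta band-power ratio for one EEG channel.
    import numpy as np
    from scipy.signal import welch

    FS = 256  # assumed EEG sampling rate in Hz

    def band_power(freqs, psd, lo, hi):
        # Integrate the power spectral density over the [lo, hi) Hz band.
        mask = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[mask], freqs[mask])

    def delta_beta_ratio(eeg_channel, fs=FS):
        freqs, psd = welch(eeg_channel, fs=fs, nperseg=4 * fs)
        delta = band_power(freqs, psd, 0.5, 4.0)   # delta band
        beta = band_power(freqs, psd, 13.0, 30.0)  # beta band
        return delta / beta  # higher values were linked to negative emotions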


Subjects
Electroencephalography, Emotions, Central Nervous System, Electroencephalography/methods, Emotions/physiology, Humans
3.
Am J Audiol; 31(3S): 961-979, 2022 Sep 21.
Article in English | MEDLINE | ID: mdl-35877954

ABSTRACT

PURPOSE: The aim of this study was to analyze the performance of multivariate machine learning (ML) models applied to a speech-in-noise hearing screening test and to investigate the contribution of the measured features toward hearing loss detection using explainability techniques. METHOD: Seven different ML techniques, including transparent (i.e., decision tree and logistic regression) and opaque (e.g., random forest) models, were trained and evaluated on a data set of 215 tested ears (99 with hearing loss of mild degree or higher and 116 with no hearing loss). Post hoc explainability techniques were applied to highlight the role of each feature in predicting hearing loss. RESULTS: Random forest (accuracy = .85, sensitivity = .86, specificity = .85, precision = .84) performed, on average, better than decision tree (accuracy = .82, sensitivity = .84, specificity = .80, precision = .79). Support vector machine, logistic regression, and gradient boosting had performance similar to that of random forest. According to the post hoc explainability analysis of the random forest models, the features with the highest relevance in predicting hearing loss were age, number and percentage of correct responses, and average reaction time, whereas total test time had the lowest relevance. CONCLUSIONS: This study demonstrates that a multivariate approach can help detect hearing loss with satisfactory performance. Further research on a larger sample, using more complex ML algorithms and explainability techniques, is needed to fully investigate the role of input features (including additional features such as risk factors and individual responses to low-/high-frequency stimuli) in predicting hearing loss.
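
The workflow described (train several classifiers, then rank features post hoc) can be illustrated with a short scikit-learn sketch. The synthetic data, feature names, and the choice of permutation importance as the explainability technique are assumptions made for illustration; the paper's actual data and methods are not reproduced here.

    # Illustrative sketch only: random forest screening classifier with a
    # post hoc permutation-importance analysis to rank input features.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    feature_names = ["age", "n_correct", "pct_correct",
                     "mean_reaction_time", "total_test_time"]  # hypothetical
    # Synthetic stand-in for the 215-ear data set (real data not available here).
    X = rng.normal(size=(215, len(feature_names)))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=1.0, size=215) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X_tr, y_tr)

    # Permutation importance: how much does shuffling one feature hurt accuracy?
    result = permutation_importance(model, X_te, y_te, n_repeats=50,
                                    random_state=0)
    for name, score in zip(feature_names, result.importances_mean):
        print(f"{name}: {score:.3f}")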


Subjects
Deafness, Hearing Loss, Algorithms, Hearing Loss/diagnosis, Humans, Machine Learning, Noise, Speech
4.
IEEE J Biomed Health Inform; 25(12): 4300-4307, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34314365

ABSTRACT

One of the current gaps in teleaudiology is the lack of adult hearing screening methods viable for use in individuals of unknown language and in varying environments. We have developed a novel automated speech-in-noise test that uses stimuli suitable for non-native listeners. The test's reliability has been demonstrated in laboratory settings and in uncontrolled environmental noise in previous studies. The aims of this study were (i) to evaluate the ability of the test to identify hearing loss using multivariate logistic regression classifiers in a population of 148 unscreened adults and (ii) to evaluate the ear-level sound pressure levels generated by different earphones and headphones as a function of the test volume. The multivariate classifiers had a sensitivity of 0.79 and a specificity of 0.79 using both the full set of features extracted from the test and a subset of three features (speech recognition threshold, age, and number of correct responses). The analysis of ear-level sound pressure levels showed substantial variability across transducer types and models, with earphone levels up to 22 dB lower than those of headphones. Overall, these results suggest that the proposed approach may be viable for hearing screening in varying environments if an option to self-adjust the test volume is included and if headphones are used. Future research is needed to assess the viability of the test for screening at a distance, for example by addressing the influence of user interface, device, and settings on a large sample of subjects with varying hearing loss.
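
A minimal sketch of the kind of multivariate logistic regression classifier described, restricted to the three-feature subset named in the abstract, might look as follows. The synthetic data and its distributions are placeholders for illustration only; the study's actual data and preprocessing are not reproduced.

    # Illustrative sketch only: logistic-regression screening classifier on
    # three features (speech recognition threshold, age, number of correct
    # responses), reporting sensitivity and specificity on held-out data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 148  # matches the sample size reported in the abstract
    X = np.column_stack([
        rng.normal(-8, 3, n),    # hypothetical SRT in dB SNR
        rng.normal(55, 15, n),   # age in years
        rng.integers(5, 20, n),  # number of correct responses
    ])
    y = (X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 2, n) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              stratify=y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    print(f"sensitivity = {tp / (tp + fn):.2f}, "
          f"specificity = {tn / (tn + fp):.2f}")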


Subjects
Noise, Speech, Adult, Hearing, Humans, Reproducibility of Results, Transducers